28 February 2024

Risky Recommendations

The European Commission Recommends Big Tech Measures to Mitigate “Systemic Risks” for Electoral Processes

2024 will see numerous elections, including the European Parliament elections in June. As part of its overall aim to establish a safe, predictable and trusted online environment, the Digital Services Act (DSA) obliges providers of very large online platforms (VLOPs) and very large online search engines (VLOSEs) to assess and mitigate systemic risks for “electoral processes” (Art. 34(1)(c) DSA). According to Art. 35(3) DSA, the Commission may issue guidelines on how to address specific risks. On this basis, the Commission published Draft Guidelines on the mitigation of systemic risks for electoral processes and sought feedback from all relevant stakeholders. The Commission “strongly encourages” providers of VLOPs and VLOSEs to “swiftly follow” the Guidelines (para 53). This encouragement is backed by the strong enforcement powers that the Commission has already begun to exercise, in particular against X. VLOP and VLOSE providers who want to avoid the direct and indirect (reputational) costs of DSA proceedings are therefore well-advised to adopt the main mitigation measures proposed by the Commission. These include internal steps to identify and address election-related risks before, during and after an electoral event, cooperation with authorities and with private DSA actors such as fact-checkers, and specific mitigation measures concerning generative AI content.

While the protection of election integrity is a laudable aim, the Guidelines as proposed would not rebuild but further erode citizen trust in the digital environment and democratic processes. The recommendations are too vague, too broad and too lenient as regards the suggested cooperation between Big Tech, civil society and public authorities.

Lack of Definitions

A document that is meant to provide legal guidance has to define its terminology. The Draft already falls short here. It does not define any of its core concepts, be it electoral processes, systemic risks, foreign information manipulation and interference (FIMI) or disinformation. Proponents of an effective regulatory framework might consider this lacuna a feature rather than a bug. From a legal point of view, however, the problematic consequence is that some of the proposed measures are insufficiently tailored to the specific risk for electoral processes, as required by Art. 35(1) DSA. For example, recommendations to promote the availability of “trustworthy information from pluralistic sources” also outside of “election times” (para 24) and to develop and apply “inoculation measures that pre-emptively build psychological resistance to possible and expected disinformation narratives” (para 16(b)(ii)) go beyond the protection of the integrity of a particular election and should be deleted. Moreover, the Guidelines should spell out whether and to what extent they apply to information that is not illegal in the sense of Art. 3(h) DSA. If that is the case, the Commission enters uncharted legal territory (cf. Peukert 2023).

Proportionality?

The Draft Guidelines also fail to concretise the proportionality of mitigation measures (cf. Art. 35(1) DSA), for example in the form of a tiered response programme. The basic, permanent level of mitigation measures should be limited to the identification of systemic risks for electoral processes and the availability of transparency measures such as labels. Content moderation measures should only be applied once a FIMI/disinformation campaign with the potential to negatively affect a concrete electoral process as a whole has been identified. These content moderation measures should themselves be graduated, with the intensity of the intervention corresponding to the spread, and thus the risk, of a FIMI/disinformation campaign. Purely informational interventions such as prompts or notes should take precedence over more intrusive measures such as circuit breakers and demonetisation. Removing content should always be a last resort.

Cooperation with Private DSA Actors

A key feature of the Draft Guidelines is the recommendation that VLOPs and VLOSEs should cooperate with “independent civil society organisations, researchers and fact-checkers” (para 12), unspecified “external experts” (para 13), “election related initiatives and campaigns” (para 16(b)) and “other relevant stakeholders” (para 18). Despite the important role of these private DSA actors, the Guidelines do not suggest any specific measure to make this multi-stakeholder cooperation transparent to the European public. Such secrecy of mitigation measures runs contrary to the overall aim of the DSA to establish a safe, predictable and trusted online environment (Art. 1(1) DSA). European citizens will not trust the highly sensitive moderation of election-related content if it is not known who is influencing the decisions. Therefore, private DSA actors who are involved in the management of often lawful content in the context of elections should at least be subject to the rules set out in Art. 22 DSA for “trusted flaggers” notifying illegal content. Consequently, VLOPs and VLOSEs should make the names, contact details, corporate structure and funding sources of private cooperation partners publicly available in an up-to-date database (cf. Art. 22(5) DSA). Given the sensitivity of the matter, the Guidelines should also recommend that actors engaging with VLOPs/VLOSEs be functionally independent of any public body, including the European Commission (cf. Art. 30(1) Audiovisual Media Services Directive). Such independence implies not only that VLOP/VLOSE cooperation partners are legally distinct from any government, but also that they do not rely on public funding to any significant extent. As a rule, private DSA actors should not receive more than one third of their annual funding from public sources.

Another requirement that “trusted flaggers” of illegal content have to meet is that they are established in a Member State (cf. Art. 22(2) DSA). The Draft Guidelines do not incorporate this requirement. On the contrary, many of the organisations referred to in the Draft Guidelines are based in third countries, mainly in the U.S. This concerns, for example, Accountable Tech, the Integrity Institute and the most important source raters (cf. para 16(c)(v) and Peukert 2023). Such a strong role of U.S.-based actors is incompatible with the aim of minimising the risk of interference in European elections from third countries, as set out in Recommendation 2023/2829 on inclusive and resilient electoral processes in the Union. Giving third-country actors, e.g. source raters such as NewsGuard, a key role in moderating the European digital public sphere also contradicts the aim of the DSA to ensure that the online environment in the EU is fully compliant with EU and Member State laws (cf. recitals 1 and 3 and Art. 2(1) DSA). It is indeed strange that the DSA imposes far-reaching obligations on U.S.-based VLOPs and VLOSEs to protect the EU’s legal order and digital sovereignty, while at the same time guidelines issued under this very Act give a free pass to U.S. (and Chinese) Big Tech companies to cooperate with U.S. actors, who are often closely linked to U.S. government agencies (cf. Peukert 2023), in the development and implementation of risk mitigation measures of utmost importance to European democracies.

Cooperation with National Authorities

On top of this, the Draft Guidelines also recommend that VLOP and VLOSE providers cooperate with “responsible national authorities”. Ironically, one relevant paragraph reads like a summary of the facts of Murthy v. Missouri, currently pending before the U.S. Supreme Court:

“Especially during an election campaign, the Commission recommends that providers of VLOPs and VLOSEs establish efficient and timely communication with the authorities with swift, efficient and appropriate follow-up mechanisms to issues flagged. … For the sake of efficacy, the Commission recommends that the communication be streamlined via pre-established points of contact (and/or a limited number of points of contact) on both sides. In order to improve the effectiveness of the mitigation measures taken, providers of VLOPs and VLOSEs should maintain records of their interactions with authorities, including any requests made and actions taken by the companies in response.”

This suggestion should be dropped. The DSA does not provide a legal basis for orders by national authorities to take action against lawful content, even if such content poses a systemic risk to electoral processes (cf. Art. 9 DSA). The suppression of lawful content upon informal “requests” of a public authority can and should be attributed to that authority (cf. CJEU Judgment of 26 April 2022, Poland/Parliament and Council, paras 39-58). As this interference with freedom of expression is not prescribed by law, it would be unconstitutional. Furthermore, the recommendation to communicate informally with national authorities conflicts with the exclusive powers of the Commission to supervise and enforce the risk management obligation pursuant to Art. 35 DSA (Art. 56(2) DSA).

The DSA Does Not Regulate the Creation of Generative AI Content

The Draft Guidelines also make recommendations concerning risks related to the creation of false or misleading generative AI content (AI hallucinations, deepfakes). This proposal, too, lacks a legal basis. Chapter III Section 5 DSA only applies to online platforms and online search engines as defined in Art. 3(i) and (j) DSA. An online platform may store and disseminate AI-generated content at the request of a recipient of the service, but generating such content in the first place is not the functionality of an online platform. For example, the “My AI” chatbot available on Snapchat is not a hosting service covered by the DSA. The same is true when a search engine returns AI-generated content as a result of a search query, because that content was already available on an indexed website. A so-called “conversational search engine”, which uses natural language predictive text to answer queries, does not qualify as a search engine within the meaning of Art. 3(j) DSA either, because it does not “return … results in any format in which information related to the requested content can be found”. Finding existing data is not the same as creating new data. Art. 3(j) DSA targets conventional search engines like Google Search and Bing, but not chatbot services such as Microsoft Copilot or Google Gemini – a type of service that triggered regulatory responses only after the adoption of the DSA. The appropriate legal basis for addressing the risks associated with the creation of generative AI content is the forthcoming AI Act. By addressing the creation of generative AI content, the Draft Guidelines confuse the scope of application of the DSA with that of the AI Act, thereby creating legal uncertainty. The Guidelines should therefore clarify that they only regulate risks related to the dissemination, but not the creation, of generative AI content.

Cooperation with the Digital Services Coordinators

The Draft Guidelines do not mention the fact that Art. 35(3) DSA empowers the Commission to issue guidelines “in cooperation with the Digital Services Coordinators” (DSCs). In my view, Art. 35(3) DSA presupposes that the Commission first consults the DSCs of all Member States and incorporates their viewpoints in the final Guidelines. This follows from the wording of the first sentence of Art. 35(3) DSA when compared to other provisions on Commission guidelines (Arts. 22(8), 25(3), 28(4), 39(3) DSA). Unlike under Arts. 22(8), 28(4) and 39(3) DSA, the Commission need not consult the European Board for Digital Services, which can perform its tasks even if one or more Member States have not designated a DSC (Art. 62(1) DSA) and which adopts its acts by a simple majority of votes (Art. 62(3) DSA). Instead, Art. 35(3) DSA requires the Commission to cooperate with “the Digital Services Coordinators”, that is, with the plurality of DSCs representing all Member States. Such a strict interpretation of Art. 35(3) DSA reflects the exceptional importance of the subject matter of guidelines under Art. 35 DSA. The mitigation of systemic risks for electoral processes touches on the very fabric of the European digital public sphere and of national and EU democratic systems. A “whole-of-society” approach to mitigating systemic risks to election integrity in the EU implies that all Member States, represented by their DSCs, actively contribute to and support measures that regulate electoral processes taking place on their territory.

The full version of Peukert‘s contribution to the public consultation on Guidelines for the Mitigation of Systemic Risks for Electoral Processes is available here.


SUGGESTED CITATION  Peukert, Alexander: Risky Recommendations: The European Commission Recommends Big Tech Measures to Mitigate “Systemic Risks” for Electoral Processes, VerfBlog, 2024/2/28, https://verfassungsblog.de/risky-recommendations/, DOI: 10.59704/6480fd3502f5750f.

One Comment

  1. Sean Wang, Fri 1 Mar 2024 at 23:08

    Hi there! Writing from the Integrity Institute here – would love to have a chat to discuss some of the points you brought up. EC’s recent recommendations did not give organizations such as ours moderating powers. We’re also fully independent from tech company funding and our membership is global. (In fact, we’re a fellow Stiftung Mercator grantee!)


